Adversarial Deep Learning for Over-the-Air Spectrum Poisoning Attacks
Authors
Abstract
An adversarial deep learning approach is presented to launch over-the-air spectrum poisoning attacks. A transmitter applies deep learning on its spectrum sensing results to predict idle time slots for data transmission. In the meantime, an adversary learns the transmitter's behavior (exploratory attack) by building another deep neural network to predict when transmissions will succeed. The adversary falsifies (poisons) the transmitter's spectrum sensing data over the air by transmitting during the short spectrum sensing period of the transmitter. Depending on whether the transmitter uses the sensing results as test data to make transmit decisions or as training data to retrain its deep neural network, either it is fooled into making incorrect transmit decisions (evasion attack) or its algorithm is retrained incorrectly for future decisions (causative attack). Both attacks are energy efficient and hard to detect (stealth) compared with jamming the long data transmission period, and they substantially reduce the transmitter's throughput. A dynamic defense is designed that deliberately makes a small number of incorrect transmissions (selected by the confidence score on channel classification) to manipulate the adversary's training data. This defense effectively fools the adversary (if any) and helps the transmitter sustain its throughput with or without an adversary present.
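The dynamic defense summarized above lends itself to a short sketch. The following is a minimal illustration under stated assumptions, not the paper's implementation: the function name dynamic_defense, the flip_fraction parameter, and the array layout are all hypothetical. It inverts the transmit decisions on which the channel classifier is most confident, since those slots carry the most informative labels for an adversary training on observed outcomes.

```python
import numpy as np

def dynamic_defense(confidence, transmit_decision, flip_fraction=0.05):
    """Sketch of a confidence-based dynamic defense (illustrative only).

    confidence        : (n_slots,) classifier confidence scores in [0, 1]
    transmit_decision : (n_slots,) boolean array of 'transmit' decisions
    flip_fraction     : fraction of slots to deliberately invert (assumed)
    """
    n_flip = max(1, int(flip_fraction * len(transmit_decision)))
    # Flip the decisions the classifier is most confident about: these
    # labels are the most useful to the adversary's exploratory attack,
    # so corrupting them poisons the adversary's training data.
    flip_idx = np.argsort(confidence)[-n_flip:]
    defended = transmit_decision.copy()
    defended[flip_idx] = ~defended[flip_idx]
    return defended
```

Only a small fraction of slots is sacrificed, which matches the abstract's point that the defense sustains throughput whether or not an adversary is present.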
Similar resources
Robust Deep Reinforcement Learning with Adversarial Attacks
This paper proposes adversarial attacks for reinforcement learning (RL) and then improves the robustness of deep reinforcement learning (DRL) algorithms to parameter uncertainties with the help of these attacks. We show that even a naively engineered attack successfully degrades the performance of the DRL algorithm. We further improve the attack using gradient information of an engineered loss func...
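As a rough illustration of the gradient-based variant this snippet mentions, one can perturb a policy's observation to suppress the action it would otherwise select. This is a generic sketch, not the paper's formulation; policy, obs, and eps are placeholders, and a batched observation tensor is assumed.

```python
import torch
import torch.nn.functional as F

def gradient_obs_attack(policy, obs, eps=0.05):
    """Generic sketch: perturb observations against a DRL policy by
    ascending the loss of the currently preferred action (assumption:
    'policy' maps a batch of observations to action logits)."""
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)
    preferred = logits.argmax(dim=-1)
    # Engineered loss: confidence in the action the policy would take.
    loss = F.cross_entropy(logits, preferred)
    loss.backward()
    # Move the observation in the direction that degrades that action.
    return (obs + eps * obs.grad.sign()).detach()
```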
Adversarial Examples: Attacks and Defenses for Deep Learning
With rapid progress and great successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks have recently been found vulnerable to well-designed input samples, called adversarial examples. Adversarial examples are imperceptible to humans but can easily fool deep neural networks in the testing/deployment stage. The ...
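A minimal sketch of how such test-time adversarial examples are typically crafted is the fast gradient sign method; this is a standard technique rather than necessarily the one surveyed here, and model, x, y, and eps are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """Fast gradient sign method: add an imperceptible perturbation in
    the direction of the loss gradient's sign (eps is illustrative)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()
```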
Towards Deep Learning Models Resistant to Adversarial Attacks
Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides a broad and unifying view on much of the prior work...
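The robust-optimization view described in this snippet is commonly operationalized with projected gradient descent (PGD) as the inner maximization. Below is a hedged sketch under stated assumptions; model, x, y, and the step sizes are illustrative.

```python
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=0.03, alpha=0.007, steps=10):
    """Inner maximization of the robust objective: iteratively ascend
    the loss, projecting back into the L-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # projection step
    return x_adv.detach()
```

Adversarial training then minimizes the ordinary training loss evaluated at x_adv instead of x.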
Auror: defending against poisoning attacks in collaborative deep learning systems
Deep learning in a collaborative setting is emerging as a cornerstone of many upcoming applications, wherein untrusted users collaborate to generate more accurate models. From the security perspective, this opens collaborative deep learning to poisoning attacks, wherein adversarial users deliberately alter their inputs to mis-train the model. These attacks are known for machine learning systems...
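A heavily simplified sketch of the clustering idea behind such defenses: group what users submit along an indicative feature and flag the minority cluster. This is an assumption-laden illustration, not Auror's actual pipeline; updates and feature_idx are hypothetical names.

```python
import numpy as np
from sklearn.cluster import KMeans

def flag_suspicious_users(updates, feature_idx):
    """Cluster user submissions on one indicative feature into two
    groups and flag the minority cluster as potential poisoners.
    updates: (n_users, n_features) array (illustrative layout)."""
    vals = updates[:, feature_idx].reshape(-1, 1)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(vals)
    minority = np.argmin(np.bincount(labels))
    return np.where(labels == minority)[0]  # indices of flagged users
```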
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Deep learning models have achieved high performance on many tasks, and thus have been applied to many security-critical scenarios. For example, deep learning-based face recognition systems have been used to authenticate users to access many security-sensitive applications like payment apps. Such usages of deep learning systems provide the adversaries with sufficient incentives to perform attack...
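The attack class this snippet describes can be sketched in a few lines: stamp a small trigger on a tiny fraction of training images and relabel them with the attacker's target class. The 3x3 corner patch, the grayscale (N, H, W) layout, and the poisoning rate are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def poison_with_trigger(images, labels, target_class, rate=0.01, rng=None):
    """Backdoor data poisoning sketch: trigger-stamp and relabel a small
    random subset so the trained model maps the trigger to target_class."""
    rng = rng or np.random.default_rng()
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(rate * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0   # white 3x3 trigger in one corner
    labels[idx] = target_class    # targeted mislabel
    return images, labels
```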
Journal
Journal title: IEEE Transactions on Mobile Computing
Year: 2021
ISSN: 2161-9875, 1536-1233, 1558-0660
DOI: https://doi.org/10.1109/tmc.2019.2950398